133 research outputs found

    Startup dilemmas - Strategic problems of early-stage platforms on the internet

    Transferred from Doria.

    Developing Persona Analytics Towards Persona Science

    Much of the reported work on personas suffers from a lack of empirical evidence. To address this issue, we introduce Persona Analytics (PA), a system that tracks how users interact with data-driven personas. PA captures users’ mouse and gaze behavior to measure their interaction with algorithmically generated personas and their use of the features of an interactive persona system. Measuring these activities provides an understanding of persona users’ behaviors, which is required for the quantitative measurement of persona use and for obtaining scientifically valid evidence. In a study with 144 participants, we demonstrate how PA can be deployed for remote user studies during exceptional times when physical user studies are difficult, if not impossible.
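    To make the kind of tracking described above concrete, the Python sketch below shows one plausible way to log and aggregate interaction events; the field names, UI elements, and dwell-time aggregation are illustrative assumptions, not the actual Persona Analytics implementation.

        from collections import defaultdict
        from dataclasses import dataclass

        @dataclass
        class InteractionEvent:
            user_id: str
            persona_id: str      # which generated persona was on screen
            source: str          # "mouse" or "gaze"
            ui_element: str      # e.g., "demographics", "quotes", "interests"
            timestamp_ms: int
            duration_ms: int     # dwell time on the element

        def dwell_time_per_element(events):
            """Total dwell time (ms) per (persona, UI element) pair - a simple usage metric."""
            totals = defaultdict(int)
            for e in events:
                totals[(e.persona_id, e.ui_element)] += e.duration_ms
            return dict(totals)

        # Hypothetical events from one participant:
        events = [
            InteractionEvent("p01", "persona_A", "gaze", "demographics", 1_000, 800),
            InteractionEvent("p01", "persona_A", "mouse", "quotes", 2_000, 1_500),
            InteractionEvent("p01", "persona_B", "gaze", "demographics", 4_000, 600),
        ]
        print(dwell_time_per_element(events))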

    Problems of Data Science in Organizations: An Explorative Qualitative Analysis of Business Professionals’ Concerns

    In this exploratory study, we analyze 150 comments from 79 participants, with roles ranging from top management and other business professionals to software developers, to identify key problems of employing data science in organizations. The comments are retrieved from a publicly available LinkedIn discussion thread in which the participants discuss problems relating to data science implementation and management. We use qualitative coding to analyze the comments and find issues in several management-related categories, including (a) job descriptions and recruitment, (b) leadership, (c) economic aspects, and (d) clarity about data use and goals. The findings also highlight that ‘data scientist’ is not just one role but a combination of many different roles, including analyst, scientist, programmer, and businessperson. The multiplicity of skills required hinders the recruitment of such individuals, and existing organizational structures are not always compatible with the multidisciplinary nature of data scientists. We conclude with recommendations to address these issues.

    Measuring user interactions with websites: A comparison of two industry standard analytics approaches using data of 86 websites

    This research compares four standard analytics metrics from Google Analytics and SimilarWeb using one year’s average monthly data for 86 websites from 26 countries and 19 industry verticals. The results show statistically significant differences between the two services for total visits, unique visitors, bounce rate, and average session duration. Using Google Analytics as the baseline, SimilarWeb average values were 19.4% lower for total visits, 38.7% lower for unique visitors, 25.2% higher for bounce rate, and 56.2% higher for session duration. The website rankings from SimilarWeb and Google Analytics are significantly correlated for all metrics, especially total visits and unique visitors. The accuracy of the metrics from both services is discussed from the vantage of the data collection methods employed. In the absence of a gold standard, combining the two services is a reasonable approach, with Google Analytics for onsite metrics and SimilarWeb for network metrics. Finally, the differences between SimilarWeb and Google Analytics measures are systematic, so given Google Analytics metrics from a known site, one can reasonably estimate Google Analytics metrics for related sites from their SimilarWeb values. The implications are that SimilarWeb provides conservative analytics in terms of visits and visitors relative to Google Analytics, and the two tools can be used in a complementary fashion when site analytics are not available for competitive intelligence and benchmarking analysis.
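    As a rough illustration of how the reported systematic differences could be used, the Python sketch below rescales hypothetical SimilarWeb values toward a Google Analytics baseline using the average percentage differences quoted above; the function and metric names are assumptions, and these aggregate averages support only coarse benchmarking, not site-level prediction.

        # Average SimilarWeb deviation from the Google Analytics baseline, per metric,
        # taken from the averages reported in the abstract above.
        SW_VS_GA = {
            "total_visits": -0.194,         # SimilarWeb 19.4% lower on average
            "unique_visitors": -0.387,      # 38.7% lower
            "bounce_rate": 0.252,           # 25.2% higher
            "avg_session_duration": 0.562,  # 56.2% higher
        }

        def estimate_ga_from_similarweb(sw_metrics):
            """Rescale SimilarWeb values toward the Google Analytics baseline.

            If SimilarWeb is on average a fraction d above/below Google Analytics,
            then GA_estimate = SW_value / (1 + d).
            """
            return {
                metric: value / (1.0 + SW_VS_GA[metric])
                for metric, value in sw_metrics.items()
                if metric in SW_VS_GA
            }

        # Hypothetical SimilarWeb readings for a competitor site:
        example = {
            "total_visits": 120_000,        # visits per month
            "unique_visitors": 70_000,
            "bounce_rate": 0.55,            # share of single-page sessions
            "avg_session_duration": 210.0,  # seconds
        }
        print(estimate_ga_from_similarweb(example))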

    The Effect of Hiding Dislikes on the Use of YouTube's Like and Dislike Features

    Using data from a major international news organization, we investigate the effect of hiding the dislike count from YouTube viewers on the propensity to use the video like/dislike features. We compare one entire month of videos before (n = 478) and after (n = 394) YouTube began hiding dislike counts. Collectively, these videos had received 450,200 likes and 41,892 dislikes. To account for content variability, we analyze likes/dislikes by sentiment class (positive, neutral, negative). Results of chi-square testing show that while both likes and dislikes decreased after the change, dislikes decreased substantially more. We repeat the analysis with four other YouTube news channels in various languages (Arabic, English, French, Spanish) and one non-news organization, with similar results in all but one case. Findings from these multiple organizations suggest that YouTube hiding the number of dislikes from viewers has altered user-platform interactions for the like/dislike features. Therefore, comparing like/dislike metrics before and after the change would give invalid insights into users’ reactions to content on YouTube.
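    The chi-square comparison described above can be sketched as follows in Python; the before/after split of likes and dislikes is a hypothetical placeholder (the abstract reports only the combined totals of 450,200 likes and 41,892 dislikes), so the numbers serve purely to show the shape of the test.

        from scipy.stats import chi2_contingency

        # Rows: before vs. after the dislike count was hidden.
        # Columns: likes vs. dislikes. The per-period counts are illustrative
        # placeholders chosen so that the totals match the abstract.
        table = [
            [250_000, 28_000],  # before: likes, dislikes
            [200_200, 13_892],  # after:  likes, dislikes
        ]

        chi2, p_value, dof, _expected = chi2_contingency(table)
        print(f"chi2 = {chi2:.1f}, p = {p_value:.3g}, dof = {dof}")
        # A small p-value indicates that the like/dislike proportion differs
        # between the two periods, i.e., the propensity to dislike changed
        # after the counts were hidden.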

    Employing large language models in survey research

    This article discusses the promising potential of employing large language models (LLMs) for survey research, including generating responses to survey items. LLMs can address some of the challenges associated with survey research regarding question wording and response bias. They can help with issues relating to a lack of clarity and understanding but cannot yet correct for sampling or nonresponse bias. While LLMs can assist with some of the challenges of survey research, at present they need to be used in conjunction with other methods and approaches. With thoughtful and nuanced approaches to development, LLMs can be used responsibly and beneficially while minimizing the associated risks.

    Engineers, Aware! Commercial Tools Disagree on Social Media Sentiment: Analyzing the Sentiment Bias of Four Major Tools

    Large commercial sentiment analysis tools are often deployed in software engineering due to their ease of use. However, it is not known how accurate these tools are, or whether the sentiment ratings given by one tool agree with those given by another. We use two datasets - (1) NEWS, consisting of 5,880 news stories and 60K comments from four social media platforms: Twitter, Instagram, YouTube, and Facebook; and (2) IMDB, consisting of 7,500 positive and 7,500 negative movie reviews - to investigate the agreement and bias of four widely used sentiment analysis (SA) tools: Microsoft Azure (MS), IBM Watson, Google Cloud, and Amazon Web Services (AWS). We find that the four tools assign the same sentiment to less than half (48.1%) of the analyzed content. We also find that AWS exhibits neutrality bias in both datasets, Google exhibits bi-polarity bias in the NEWS dataset but neutrality bias in the IMDB dataset, and IBM and MS exhibit no clear bias in the NEWS dataset but bi-polarity bias in the IMDB dataset. Overall, IBM has the highest accuracy relative to the known ground truth in the IMDB dataset. Findings indicate that psycholinguistic features - especially affect, tone, and use of adjectives - explain why the tools disagree. Engineers are urged to exercise caution when implementing SA tools in applications, as the tool selection affects the obtained sentiment labels.
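    A minimal Python sketch of the agreement measure implied above (the share of items for which all four tools assign the same label) follows; the tool names, labels, and example items are illustrative assumptions rather than the study's actual pipeline.

        from collections import Counter

        def full_agreement_rate(labels_per_item):
            """Share of items for which every tool produced the identical label."""
            agree = sum(1 for labels in labels_per_item if len(set(labels.values())) == 1)
            return agree / len(labels_per_item)

        def label_distribution(labels_per_item, tool):
            """Label counts for one tool, e.g., to spot a neutrality or bi-polarity bias."""
            return Counter(labels[tool] for labels in labels_per_item)

        # Hypothetical labels from the four tools for three items:
        items = [
            {"MS": "negative", "IBM": "negative", "Google": "negative", "AWS": "neutral"},
            {"MS": "positive", "IBM": "positive", "Google": "positive", "AWS": "positive"},
            {"MS": "neutral",  "IBM": "negative", "Google": "positive", "AWS": "neutral"},
        ]
        print(full_agreement_rate(items))        # -> 0.333...
        print(label_distribution(items, "AWS"))  # Counter({'neutral': 2, 'positive': 1})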